Five ways AI is helping scammers steal crypto
Deepfake endorsements, AI-generated phishing, and voice-cloning scams are making it harder than ever to tell real from fake

You get a WhatsApp message from your financial advisor, a voice note confirming an urgent crypto transfer. It sounds just like them – but it’s not. From deepfake videos to phishing messages so convincing they could come from a Fortune 500 company, AI-powered scams are fooling even seasoned investors.
Authorities are taking note. Recently, the FBI warned that criminals are embedding AI-powered chatbots into fraudulent websites and using generative AI to craft realistic phishing scams.
AI isn’t just advancing crypto – it’s also giving scammers new ways to steal funds. Let’s look at five ways artificial intelligence is being used to target investors – and how to stay ahead of these evolving threats.
1. AI-powered phishing
Phishing remains one of the most prevalent scams in crypto, and AI is taking it to a whole new level. Last year alone, phishing attacks led to losses of approximately $494 million across the web3 ecosystem, according to Scam Sniffer.
Unlike traditional phishing, which often contains poor grammar, strange wording, or formatting errors, AI-generated messages are polished, highly personalized, and incredibly convincing.
Cybercriminals use AI to craft phishing emails and direct messages that mirror those from legitimate platforms and firms. Victims may receive what looks like official communications, only to be directed to fake websites designed to steal their credentials.
These scams are more difficult to detect because they look professional. Even experienced crypto users can fall victim, making vigilance more crucial than ever.
2. AI voice-cloning
AI is now capable of mimicking human voices with alarming realism and accuracy, making phone and voice-based scams more convincing than ever. In one recent case, an investor in Hong Kong lost HK$145 million (around US$18.6 million) after being tricked by an AI-generated voice. The scammer impersonated a financial manager from what the victim believed to be a reputable firm.
The victim wanted to purchase crypto mining equipment. During negotiations over WhatsApp, they received voice instructions for the transaction, leading them to transfer millions in USDT to a wallet controlled by the scammer.
AI-powered voice-cloning software requires just a few seconds of recorded speech to generate an accurate replica of someone’s voice. With AI making identity fraud more sophisticated, relying on voice verification alone is no longer safe.
3. Deepfake scams
AI-generated deepfake videos are becoming a powerful tool for scammers, allowing them to impersonate well-known figures to promote fraudulent projects. According to AI firm Sensity, Elon Musk is the most frequently featured figure in these scams. However, he is not the only one.
One uncovered scam involved Skameri, a fraudulent trading platform that used AI-generated videos of British financial commentator Martin Lewis to lure victims. The scam, which promised lucrative returns through automated crypto trading, circulated widely on social media, with many users reporting fabricated testimonials of Lewis seemingly endorsing the platform – despite his having no connection to it. In July 2023, Lewis took to X to warn the public about the circulating deepfake video, calling it a scam.
WARNING. THIS IS A SCAM BY CRIMINALS TRYING TO STEAL MONEY. PLS SHARE. This is frightening, it's the first deep fake video scam I've seen with me in it. Govt & regulators must step up to stop big tech publishing such dangerous fakes. People'll lose money and it'll ruin lives. https://t.co/ZzaBELg1kg
– Martin Lewis (@MartinSLewis) July 6, 2023
He later emphasized, “All I can say to the public is that if you see any celebrity advert on social media (or arguably any ads at all) you should assume it’s a scam until you have direct corroboration from a trusted source that it isn’t.”
Deepfakes have become so realistic that even tech-savvy crypto investors can be fooled. Because AI is improving so quickly, these videos are becoming more difficult to distinguish from real footage.
4. Automating scam operations
AI is not just making scams more convincing – it’s allowing cybercriminals to scale their operations at an unprecedented rate. With large language models (LLMs) and generative AI, scammers can quickly generate legitimate-looking emails that are relevant and convincing. They can also automate phishing campaigns, sending personalized scam messages to thousands or even millions of potential victims at once.
Some scammers also use AI to optimize fake websites for search engines. According to Elliptic, one scam-as-a-service provider claimed to use AI for designing fraudulent interfaces on websites, leveraging search engine optimization (SEO) tactics to make their scams more visible.
Because AI automates so much of the scamming process, fraudsters can launch more attacks, refine their tactics quickly, and disappear before authorities catch up.
5. AI-powered fake investment schemes
Scammers are using AI to create highly convincing investment schemes, complete with fake testimonials, AI-generated trading bots, and professional-looking marketing materials. These scams often promise guaranteed returns, using AI-generated social media content and chatbot-driven customer service to appear legitimate.
Some of these fake schemes go as far as showing fabricated trading results or deepfake videos of well-known investors endorsing their platforms. Combined with AI-generated phishing messages, these scams can be extremely difficult to detect.
How to stay ahead of AI-powered scams
AI is making crypto scams more deceptive and harder to detect. As fraudsters refine their tactics, vigilance is essential.
Be wary of unsolicited investment offers, guaranteed profit schemes, and urgent messages – they’re classic red flags. Always verify unexpected crypto-related requests through official channels before taking action.
No security measure is foolproof, but awareness and smart security habits are your best defense. In an era of AI-driven deception, staying cautious isn’t just advisable – it’s essential.